
Heterogeneous Face Recognition by Margin-Based Cross-Modality Metric Learning


Abstract

Heterogeneous face recognition deals with matching face images across different modalities or sources. The main challenge lies in cross-modal differences and variations, and the goal is to achieve cross-modality separation among subjects. A margin-based cross-modality metric learning (MCM2L) method is proposed to address this problem. A cross-modality metric is defined in a common subspace into which samples of the two modalities are mapped and measured. The objective is to learn metrics that satisfy two constraints: the first minimizes pairwise, intrapersonal cross-modality distances; the second enforces a margin between subject-specific intrapersonal and interpersonal cross-modality distances. This is achieved by defining a hinge loss on triplet-based distance constraints for efficient optimization, which allows the proposed method to focus on optimizing distances for those subjects whose intrapersonal and interpersonal distances are hard to separate. The method is further extended to a kernelized variant, KMCM2L. Both methods have been evaluated on an ID card face dataset and two other cross-modality benchmark datasets, incorporating various feature extraction methods, including recent deep learned features. In extensive experiments and comparisons with state-of-the-art methods, MCM2L and KMCM2L achieved marked improvements in most cases.
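To make the triplet-based margin constraint concrete, below is a minimal sketch of the kind of hinge loss the abstract describes, assuming (hypothetically) linear projections `Wa` and `Wb` that map the two modalities into a common subspace where squared Euclidean distance is measured. The function names and the choice of squared distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_modality_triplet_hinge(Wa, Wb, xa, xb_pos, xb_neg, margin=1.0):
    """Hinge loss on one cross-modality triplet.

    xa      -- anchor sample from modality A
    xb_pos  -- same-subject (intrapersonal) sample from modality B
    xb_neg  -- different-subject (interpersonal) sample from modality B
    Wa, Wb  -- assumed linear projections into a common subspace
    """
    za = Wa @ xa                      # anchor mapped into the common subspace
    zp = Wb @ xb_pos                  # intrapersonal cross-modality sample
    zn = Wb @ xb_neg                  # interpersonal cross-modality sample
    d_intra = np.sum((za - zp) ** 2)  # distance to be minimized
    d_inter = np.sum((za - zn) ** 2)  # distance pushed beyond the margin
    # Zero loss once the interpersonal distance exceeds the intrapersonal
    # one by at least `margin`; hard-to-separate triplets dominate the sum.
    return max(0.0, margin + d_intra - d_inter)
```

Because satisfied triplets contribute zero loss, summing this over all triplets concentrates the optimization on exactly the subjects whose intrapersonal and interpersonal distances are hard to separate, which matches the behavior the abstract attributes to the hinge formulation.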
